Leaky ReLU function



A Appendix

Neural Information Processing Systems

Finally, when using different values for P, we can obtain other group actions. Let us first show that (2) and (3) correspond to a particular case of the construction of Cohen et al. This proves (2) and (3). In both subcases, by Lemma 4, θ must be a leaky ReLU function. Given a non-equivariant model, a simple way to let it "learn" to be equivariant is to train it on both the original inputs and their transformed copies. This doubles the size of the training set, which increases the training time.
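
The augmentation trick described in this excerpt is easy to sketch in code. The following is a minimal, hypothetical illustration (not taken from the paper) using NumPy: a horizontal flip stands in for the group action, and `model.fit` is a placeholder for any training interface.

```python
import numpy as np

# Sketch of equivariance-by-augmentation: train on the original samples
# plus their transformed copies, which doubles the training set. The flip
# is an illustrative group action; `model` is a placeholder, not from the paper.

def augment_with_flips(X, y):
    """Append a horizontally flipped copy of every sample.

    X: array of shape (n_samples, height, width); y: shape (n_samples,).
    """
    X_flipped = X[:, :, ::-1]                       # flip the width axis
    X_aug = np.concatenate([X, X_flipped], axis=0)  # 2 * n_samples inputs
    y_aug = np.concatenate([y, y], axis=0)          # labels repeat (invariant task)
    return X_aug, y_aug

# Usage:
# X_aug, y_aug = augment_with_flips(X_train, y_train)
# model.fit(X_aug, y_aug)   # any classifier exposing a fit() interface
```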


Efficient Quantum Circuits for Machine Learning Activation Functions including Constant T-depth ReLU

Zi, Wei, Wang, Siyi, Kim, Hyunji, Sun, Xiaoming, Chattopadhyay, Anupam, Rebentrost, Patrick

arXiv.org Artificial Intelligence

In recent years, Quantum Machine Learning (QML) has increasingly captured the interest of researchers. Among the components of this domain, activation functions play a fundamental and indispensable role. Our research focuses on developing quantum circuits for activation functions that can be integrated into fault-tolerant quantum computing architectures, with an emphasis on minimizing $T$-depth. Specifically, we present novel implementations of the ReLU and leaky ReLU activation functions, achieving constant $T$-depths of 4 and 8, respectively. Leveraging quantum lookup tables, we extend our exploration to other activation functions such as the sigmoid. This approach enables us to customize precision and $T$-depth by adjusting the number of qubits, making our results adaptable to various application scenarios. This study represents a significant advance toward the practicality and application of quantum machine learning.
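
For reference, the two activation functions whose circuits this abstract describes have simple classical definitions. The NumPy sketch below is illustrative only; the slope `alpha = 0.01` is a common classical default, not a value taken from the paper, whose fixed-point encoding and $T$-depth results do not depend on it.

```python
import numpy as np

def relu(x):
    """ReLU: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: x where x >= 0, alpha * x otherwise.

    alpha = 0.01 is a common classical default; it is an illustrative
    choice here, not a parameter reported in the paper.
    """
    return np.where(x >= 0, x, alpha * x)

# Example:
# leaky_relu(np.array([-2.0, 0.0, 3.0]))  ->  array([-0.02, 0., 3.])
```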